Purchase Fuzzies and Utilons Separately
Yesterday:
There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...
If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he’s doing what he’s doing, that’s fine. But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.
I hold open doors for little old ladies. I can’t actually remember the last time this happened literally (though I’m sure it has, sometime in the last year or so). But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car’s interior. I looked in to see if there were packages being taken out, but this was not so. I looked around to see if anyone was doing anything with the car. And finally I went up to the house and knocked, then rang the bell. And yes, the trunk had been accidentally left open.
Under other circumstances, this would be a simple act of altruism, which might signify true concern for another’s welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable. I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn’t deduct any penalty points for the others. Just so long as people get helped.
But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others. That is: can I really defend this as the best use of my time, given the other things I claim to believe?
The obvious defense—or perhaps, obvious rationalization—is that an act of altruism like this one acts as a willpower restorer, much more efficiently than, say, listening to music. I also mistrust my ability to be an altruist only in theory; I suspect that if I walk past problems, my altruism will start to fade. I’ve never pushed that far enough to test it; it doesn’t seem worth the risk.
But if that’s the defense, then my act can’t be defended as a good deed, can it? For these are self-directed benefits that I list.
Well—who said that I was defending the act as a selfless good deed? It’s a selfish good deed. If it restores my willpower, or if it keeps me altruistic, then there are indirect other-directed benefits from that (or so I believe). You could, of course, reply that you don’t trust selfish acts that are supposed to be other-benefiting as an “ulterior motive”; but then I could just as easily respond that, by the same principle, you should just look directly at the original good deed rather than its supposed ulterior motive.
Can I get away with that? That is, can I really get away with calling it a “selfish good deed”, and still derive willpower restoration therefrom, rather than feeling guilt about it being selfish? Apparently I can. I’m surprised it works out that way, but it does. So long as I knock to tell them about the open trunk, and so long as the one says “Thank you!”, my brain feels like it’s done its wonderful good deed for the day.
Your mileage may vary, of course. The problem with trying to work out an art of willpower restoration is that different things seem to work for different people. (That is: We’re probing around on the level of surface phenomena without understanding the deeper rules that would also predict the variations.)
But if you find that you are like me in this aspect—that selfish good deeds still work—then I recommend that you purchase warm fuzzies and utilons separately. Not at the same time. Trying to do both at the same time just means that neither ends up done well. If status matters to you, purchase status separately too!
If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:
To purchase warm fuzzies, find some hard-working but poverty-stricken woman who’s about to drop out of state college after her husband’s hours were cut back, and personally, but anonymously, give her a cashier’s check for $10,000. Repeat as desired.
To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price. Make a big deal out of it, show up for their press events, and brag about it for the next five years.
Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar. Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.
I would furthermore advise the billionaire that what they spend on utilons should be at least, say, 20 times what they spend on warm fuzzies—5% overhead on keeping yourself altruistic seems reasonable, and I, your dispassionate judge, would have no trouble validating the warm fuzzies against a multiplier that large. Save that the original, fuzzy act really should be helpful rather than actively harmful.
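(Aside for readers who want the allocation rule spelled out mechanically: below is a minimal sketch in Python of the procedure described above, namely carving out a small fuzzy budget first and then giving greedily to whichever charity currently offers the most expected utilons per marginal dollar, until its marginal efficiency falls below the next charity’s. The charity names, the diminishing-returns curves, and every number in it are invented for illustration; none of them come from the post.)

```python
# Hypothetical illustration only: charity names, curves, and numbers are made up.

def allocate(budget, marginal_utilons, fuzzy_fraction=0.05, step=1_000):
    """Split off a fuzzy budget, then allocate the rest greedily.

    `marginal_utilons` maps a charity name to a function returning expected
    utilons per dollar, given how much that charity has already received.
    """
    fuzzy_budget = budget * fuzzy_fraction      # the ~5% overhead on staying altruistic
    remaining = budget - fuzzy_budget
    given = {name: 0.0 for name in marginal_utilons}

    while remaining >= step:
        # Always fund the charity with the highest *current* marginal efficiency;
        # as its returns diminish, the next charity on the list takes over.
        best = max(marginal_utilons, key=lambda name: marginal_utilons[name](given[name]))
        given[best] += step
        remaining -= step

    return fuzzy_budget, given


if __name__ == "__main__":
    toy_charities = {
        "bed nets":      lambda x: 10.0 / (1 + x / 50_000),   # steeply diminishing returns
        "deworming":     lambda x: 8.0 / (1 + x / 200_000),   # gently diminishing returns
        "research fund": lambda x: 6.0,                        # flat marginal returns
    }
    fuzzies, grants = allocate(1_000_000, toy_charities)
    print(f"fuzzy budget: ${fuzzies:,.0f}")
    for name, amount in grants.items():
        print(f"{name}: ${amount:,.0f}")
```

The only structural point the sketch makes is that the fuzzy budget and the utilon budget are fixed separately, before any optimizing begins.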
(Purchasing status seems to me essentially unrelated to altruism. If giving money to the X-Prize gets you more awe from your friends than an equivalently priced speedboat, then there’s really no reason to buy the speedboat. Just put the money under the “impressing friends” column, and be aware that this is not the “altruism” column.)
But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time. Writing a check for $10,000,000 to a breast-cancer charity—while far more laudable than spending the same $10,000,000 on, I don’t know, parties or something—won’t give you the concentrated euphoria of being present in person when you turn a single human’s life around, probably not anywhere close. It won’t give you as much to talk about at parties as donating to something sexy like an X-Prize—maybe a short nod from the other rich. And if you threw away all concern for warm fuzzies and status, there are probably at least a thousand underserved existing charities that could produce orders of magnitude more utilons with ten million dollars. Trying to optimize for all three criteria in one go only ensures that none of them end up optimized very well—just vague pushes along all three dimensions.
Of course, if you’re not a millionaire or even a billionaire—then you can’t be quite as efficient about things, can’t so easily purchase in bulk. But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries. Volunteer at a soup kitchen. Or just get your warm fuzzies from holding open doors for little old ladies. Let that be validated by your other efforts to purchase utilons, but don’t confuse it with purchasing utilons. Status is probably cheaper to purchase by buying nice clothes.
And when it comes to purchasing expected utilons—then, of course, shut up and multiply.
I’m amused and relieved to have finally followed the “shut up and multiply” link—dozens of prior allusions left me puzzled at the advice to multiply in the biblical sense. I’d always felt it a bit cultish to win by having more (indoctrinated) babies :)
This may well have made my day.
Today I overheard a man in the supermarket telling his wife that maybe they should buy some lottery tickets, and I was reminded of Eliezer’s “opening doors for little old ladies” line (which he repeated in his recent video answers).
Isn’t buying lottery tickets also a form of purchasing warm fuzzies? I’m not sure that opening doors for little old ladies is any more defensible for a utilitarian than buying lottery tickets is for a rationalist.
To expand on this comparison a bit more, one important difference between the two is that once a person understands the concept of expected value, and knows that lottery tickets have expected value below purchase price, the warm-fuzzy effect largely goes away. But for some reason, at least for Eliezer, the warm-fuzzy effect of opening a door for an old lady doesn’t go away, even though he knows that doing so creates negative expected utilons.
Perhaps the warm-fuzzy effect remains because Eliezer rationalizes it thus: if I can restore my willpower through the warm-fuzzy effect of opening doors for little old ladies, I can be more productive in producing utilons through my work, so it’s really a good thing after all, and I deserve the warm-fuzzy effect. But perhaps a rationalist can use a similar line of thought to keep the warm-fuzzy effect of buying lottery tickets. Should one do so?
ETA: Apparently Eliezer already addressed the issue of lottery tickets, with the following conclusion:
Which seems completely inconsistent with the position he takes here...
Humans are social animals.
Buying lottery tickets seems less likely to trigger ancestral-environment reward circuitry than having a positive interaction with another person. Windfall from the capricious environment seems a worse bet than good will towards you in a small tribe where word gets around. This is even completely ignoring the plausible root of most altruism in kin selection.
The difference seems to be that the appeal of lottery tickets is already a change from the baseline (in the wrong direction), caused by confusion, and so it’s easier to retract this appeal by understanding the situation. Removing more immediate inbuilt drives is on the other hand infeasible.
I know this is an old comment, but...
I think holding open doors for old ladies is not only defensible, but entirely practical for utilitarians.
First, there’s plenty of research suggesting that little actions like this can have significant spillover effects on our attitudes for some time after. Second, how exactly are you going to convert that handful of seconds into a higher utility payoff? Are you going to stay at work for an extra hour so that, if you run into some people in need of assistance after you leave, you can not help them? Are you going to stand there on the other side of the door and think about important AI problems while the old lady struggles to open it?
Time is a great deal less fungible than money.
I visualized this scenario and laughed out loud.
Note: This comment is off-topic.
I consider opening the door for a frail old lady roughly equivalent to opening the door for a perfectly healthy but heavily encumbered young lady. My utility function includes a term for them, and the change in that term outweighs the change in the term for me.
I think you’re missing the point here, Robin. Have you read Eliezer’s post that my comment is filed under? Please do that if you haven’t.
...right—ignore my comment.
Kiva.org has the distinct honor of being the only charity that has ensured me maximum utilons for my money with an unexpected bonus of most fuzzies experienced ever. Seeing my money being repaid and knowing that it was possible only because my charity dollars worked, that the recipient of my funds actually put the dollars to effective use enough to thrive and pay back my money, well, goddamn it felt good.
Kiva feels suspiciously well-optimized on three counts—there’s the utilons (which, given that you’re incentivizing industry and entrepreneurship, are pretty darn good), the warm fuzzies you mentioned, and the fact that it seems it could also help me overcome some akrasia with regard to savings. If I loan money out of my paycheck to Kiva each month, and reinvest all money repaid, then (assuming a decent repayment rate), the money cycling should tend to increase, meaning that if I need to, say, put a down payment on a house one day, I can take some out, knowing it’s already done good.
I feel very suspicious of my mind for being convinced this plan is optimal along one dimension, and extremely strong along two others. It doesn’t seem as though it should be so easy. If I’m missing something (along any dimension), please feel free to tell me.
I second the suspicious feeling. It boils down to one question: if Kiva is such a great option, why is it not more popular?
I begin to suspect that rationalists should simply delete this question from their mental vocabularies. Most popular things are optimized to be popular with an audience that doesn’t know how to resist manipulation (but thinks itself invincible, in accordance with the bias blind-spot bias); this gives rise to a case of “the majority is always wrong.”
I’m not sure that applies here: for QWERTY keyboards, network effects are positive—the more people use them, the better (i.e., in this case, more convenient) it is for me to use them. But for charities, network effects are positive for status (so long as my social circles aren’t too hipster), neutral for fuzzies (for me at least—YMMV), and negative for utilons (diminishing returns, finite room for more funding).
But—if you were optimizing strictly for fuzzies—could you have gotten even more fuzzies by giving less money to one recipient in person and tracking their outcome in person?
This is probably too late to be read by anyone, but here is a column by Steven Landsburg, my favorite economist, which says essentially the same thing; he simply phrases it as “don’t diversify your charity ‘portfolio’: shut up, multiply, and give your whole charity budget to the most deserving one”.
“Utilons” isn’t quite the right word: utilons are all I purchase. My utility function is a sum of components: I can decompose it into a local part to do with my happiness and the happiness of those close to me (and thus status, warm fuzzies and the like) and a global part to do with things like the lives of strangers and the future of humanity. I try to strongly mark the boundary between those two, so I don’t for example value the lives of people in the same country as me more than those in different countries.
You’re saying I can more optimally spend resources on efforts that clearly serve one or the other than on efforts that try to do both and do neither well, and I agree, I’d just phrase it differently: purchase big-picture utility and small-picture utility separately.
Perhaps Eliezer doesn’t directly value fuzzies or status, so when he is purchasing them he isn’t purchasing utilons directly. Rather, he is purchasing motivation to continue doing things which directly purchase utilons. In other words, he doesn’t really want fuzzies, but if he doesn’t buy any, he’ll lose his motivation to be altruistic altogether. So he buys fuzzies to keep his motivation up which allows him to keep directly purchasing utilons—the things he does actually value. That’s at least how I read it.
edit: clarity
Sum of components, product, or more complex functions.
Just a sum, I think; my understanding of utility is that it’s part of the definition that it makes sense to sum it.
I think Eliezer is using “utilon” to refer to the unit of value in utilitarianism (i.e., the theory of aggregating value linearly across individuals) whereas what you’re talking about is probably the unit of value in expected utility maximization (i.e., the theory of aggregating value linearly across possible worlds). To avoid confusion, I propose that we call the latter “utils” (which as dreeves pointed out is already standard for this usage). In other words, let’s use “utilons” when talking about utilitarianism, and “utils” when talking about expected utility maximization.
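One way to make the distinction concrete (the notation is mine, not either commenter’s): utilitarian aggregation sums a value term linearly across individuals, while expected-utility maximization sums a single utility function linearly across possible worlds, weighted by their probabilities.

```latex
\begin{align*}
  W &= \sum_{i \in \text{individuals}} u_i
      && \text{(``utilons'': aggregate across people)} \\
  \mathbb{E}[U] &= \sum_{w \in \text{possible worlds}} p(w)\, U(w)
      && \text{(``utils'': aggregate across worlds)}
\end{align*}
```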
Would you agree that we’re all (in this thread) drawing the same distinction and just labelling it differently?
I agree that we should switch to the standard term “utils” here, but I think I wouldn’t go for also using “utilons” in the way you propose; enough people would continue to use the words in a different way that we wouldn’t succeed in ironing out the confusion. I’d prefer something like “global utils” and “fuzzy utils/fuzzies” which can’t be taken to mean something else.
So you see warm fuzzies, status boost, and societal good as subtypes of the utilon output of altruistic activities? Interesting.
It’s more about what you use the word “utilon” for; I see “utilon” as the measure of whatever consequences I value, but EY is using it here to refer to utility without weighting for proximity to the speaker, which I call “big picture utility” above.
By coincidence, I already do this—donating two hours a week to a local charity for children with learning difficulties, and donating cash to the Gates Foundation (they seem much better qualified than me to calculate the expected return on charity investment), and a portion of the charity donations from my upcoming wedding is earmarked for the “establishment at which Eliezer works”. I actually did it following the logic of this post, so it wasn’t a coincidence either.
This may be the good reaction for rationalists, but how do you feel about it, Eliezer, from your position in the non-profit sector? Do you think you should be teaching people to divide up their fuzzies and utilons, or making your non-profit more fuzzy?
GiveWell and its recommended charities probably strictly dominate the Gates Foundation, except possibly for affiliation benefits with Gates (like cheering a sports team with a powerful star player). Gates isn’t obviously cool enough to want to affiliate with publicly though. The Clinton Global Initiative is probably a better choice.
Wedding? Yeah, probably do that right. I may have made a mistake by not doing so and thus greatly antagonizing my blood relatives.
Nameless org. Consider contacting me before large donations so I can inform you of any ways to leverage them, and keep in touch via their blog.
I’m wondering a little bit about all this beating-around-the-bush: I know what nameless org is a reference to, but it seems likely that some readers won’t. You guys are talking about the forbidden topic (and Eliezer touches on it indirectly), so I’m not sure how the spirit of the ban is being fulfilled. Can we speak in plain terms now that April has arrived, or have I forgotten its expiration date?
It expires in May. See the About page.
um, is it ok to ask what this forbidden topic about the nameless org is, now that the equally unexplained ban expired seven months ago…? :)
(the about page does not appear enlightening)...
http://lesswrong.com/lw/ye/and_say_no_more_of_it/
Edit: The “nameless org” of course is SIAI.
Thank you :)
I had guessed that SIAI was the likely answer… but had very little evidence beyond the fact that it’s an org that is strongly correlated with EY :)
Man doesn’t live by warmth, status, and altruism alone, however.
For a more comprehensive list of things that may contribute to efficacy, I would suggest Aristotle’s Ethics or Tim Ferriss’s blog and The 4-Hour Workweek.
Worth pointing out that one probably has to mix up the warmth- and status-generating activities quite a bit to avoid diminishing returns too. The first large check you give away will surely be very effective; after the fifth, maybe not. Tipping generously when it’s appropriate will always buy status and warmth, with Prospect Theory’s predicted effective multiplier for small numbers around a reference point.
Do you view this more as “Your true utility isoclines are concave in the plane of utilons vs. fuzzies”, or, “You are not a rational utility-maximizer”?
I would have made this into a longer post, but it works much better appended to this one:
It’s clear that you can’t just make willpower appear with a snap of your fingers, so I consider fuzzies to be utilons for many human utility functions. However, utilitarians have it even better—if they get fuzzies by giving fuzzies to someone else, they get to count all of the fuzzies generated as utilons. I urge people focused on being effective utilitarians to keep this in mind if they feel like they’re running low on fuzzies.
I think you meant they should count all the utilons generated as fuzzies?
Thanks, Eliezer!
This one was actually news to me. Separately is more efficient, eh? Hmmm… now I get to rethink my actions.
I had deliberately terminated my donations to charities that seemed closer to “rescuing lost puppies”. I had also given up personal volunteering (I figured out {work—earn—donate} before I heard it here.) And now I’m really struggling with akrasia / procrastination / laziness / rebellion / escapism.
“You could, of course, reply that you don’t trust selfish acts that are supposed to be other-benefiting as an ‘ulterior motive’.” That’s a poisonous meme that runs in my brain. But I consciously declare that to be nonsense. I don’t ever want to discuss “pure altruism” ever again! I applaud ulterior motives, “Just so long as people get helped.” If you can figure out your ulterior motives, use them! Put them in harness. You might as well, they aren’t going away.
I’m glad I’m an egoist and don’t have to worry about stuff like this.
Downvoted because you’re an egoist.
Downvoted because that’s not a good reason to downvote a comment.
Maybe relevant to this post: the googolplex dust specks issue seems to be settled by nonlinearity/proximity.
Other people’s suffering is non-additive because we value different people differently. The pain of a relative matters more to me than the pain of a stranger. A googolplex people can’t all be important to me because I don’t have enough neural circuitry for that. (Monkeysphere is about 150 people.) This means each subsequent person-with-dust-speck means less to me than the previous one, because they’re further from me. The infinite sum may converge to a finite value that I feel is smaller than 50 years of torture.
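As a worked version of that convergence claim (the geometric discounting is my own illustrative assumption, not something stated in the comment): if the first stranger’s dust speck costs me a disutility of ε and each additional person’s weight is multiplied by some r < 1, then the total over any number of people N is bounded:

```latex
\sum_{k=0}^{N-1} \epsilon\, r^{k} \;<\; \sum_{k=0}^{\infty} \epsilon\, r^{k} \;=\; \frac{\epsilon}{1-r}.
```

This stays below the disutility of fifty years of torture whenever ε < (1 − r) · D_torture, no matter whether N is a googolplex or larger.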
It seems that to shut up and multiply, an altruist/rationalist needs to accept a non-obvious axiom that each person’s joy or suffering carries equal weight regardless of proximity to the altruist. I for one refuse to accept this axiom because it’s immoral to me; think about it.
I have the exact opposite intuition. It is not obvious at all to me that closeness (emotional or physical) to someone changes the weight of their suffering. If someone is going to get their fingers slammed in a door, then it matters not whether I know them personally or am a thousand light-years distant.
Admittedly, I may have a slightly more visceral reaction if someone I know gets in a car wreck than looking at the statistics, but I disagree that means it is Right for me to prevent that car wreck of someone close, only to thereby cause another and in addition lead someone to stub their toe.
Where they are does not change their suffering, but perhaps it changes the weight of your obligation to do something about it?
In social situations perhaps. But that’s only because you can’t physically act or it is more optimal economically and logistically for everyone to manage their own sphere of influence. If you have in front of you two buttons and you must press one, this changes nothing.
Quite a few holes here… You don’t need any proximity axiom for the googolplex. The person to be tortured can be made more remote than any of those suffering from dust specks (if you insist on mentioning proximity, consider the balancing between a googolplex squared of dust speck sufferers versus a mere googolplex of torture victims).
(I personally reject the googolplex dust speck argument simply because I don’t consider a single dust speck to amount to suffering; I accept the argument at about the level of a toe stubbing that would still be felt the next day)
There are two ways you might be wrong. First, the neg-utility of dust specks could approach zero as distance increases, and the neg-utility of torture could approach a nonzero value that’s greater than the sum of infinitely many dust specks. Second, I could imagine accepting torture if the victim were sufficiently neurologically distant from me, say on the empathetic level of a fictional character. (Neurological distance is, more or less, the degree of our gut acknowledgement that a given person actually exists. The existence of a googolplex people is quite a leap of faith.) Take your pick.
I still believe proximity solves the dust speck and Pascal’s mugging parables. Well, not quite “solves”: proximity gives a convincing rationalization to the common-sense decision of a normal person that rationalism so cleverly argues against. Unfortunately scholastics without experiment can’t “solve” a problem in any larger sense.
I don’t see why anyone would think the dust speck problem is a problem. The simplest solution seems to be to acknowledge that suffering (and other utility, positive or negative) isn’t additive. Is there some argument that it is or should be?
Well, you’re right, but I wasn’t completely satisfied by such a blunt argument and went on to invent an extra layer of rationalization: justify non-additivity with proximity. Of course none of this matters except as a critique of the “shut up and multiply” maxim. I wouldn’t want to become a utility-additive mind without proximity modifiers. Maybe Eliezer would; who knows.
I may be straying from your main point here, but...
Could you really utilize these 60 seconds in a better, more specialized way? Not just any block of 60 seconds—these specific 60 seconds, which happened during your walk.
Had you not encountered that open trunk, would you have opened your laptop in the middle of that walk and started working on a world-changing idea or an important charity plan? Unlikely—if that were the case, you would already have been sitting somewhere working on it. You went out for a walk, not for work.
Would you, had you not encountered that open trunk, have finished your walk 60 seconds earlier, gone to sleep 60 seconds earlier, woken up 60 seconds earlier, started your workday 60 seconds earlier, and by doing all that moved those 60 seconds to connect with your regular productivity time? This is probably not the case either—if it were, that would mean you intentionally used that hard-earned fuzz as an excuse to deliberately take one minute off your workday, and that would take a small-mindedness you do not seem to possess.
No—that act was an Action of Opportunity. Humans don’t usually have a schedule so tight and so accurate that every lost minute messes it up. There is room for leeway, into which you can fit such gestures without compromising your specialized work.
I follow the virtue-ethics approach, I do actions that make me like the person that I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren’t altruistic, then I wouldn’t be making myself into the person I want to be.
It’s a very different framework from util maximization, but I find it’s much more satisfying and useful.
I’ve realized that my sibling comment is logically rude, because I’ve left out some relevant detail. Most relevantly, I tend to self-describe as a virtue ethicist.
I’ve noticed at least 3 things called ‘virtue ethics’ in the wild, which are generally mashed together willy-nilly:
1. An empirical claim: that humans generally act according to habits of action, and that doing good things makes one more likely to do good things in the future, even in other domains.
2. The notion that ethics is about being a good person and living a good life, instead of about whether a particular action is permissible or leads to a good outcome.
3. Virtue as an achievement: a string of good actions can be characterized after the fact as virtuous, and that demonstrates the goodness of character.
There are virtue ethicists who buy into only some of these, but most often folks slip between them without noticing. One fellow I know will often say that #1 being false would not damage virtue ethics, because it’s really about #2 and #3 - and yet he goes on arguing in favor of virtue ethics by citing #1.
This is a great framework—very clear! Thanks!
And if it wasn’t more satisfying and useful, would you still follow it?
That’s an empirical question. Would you still subscribe to virtue ethics if you found out that humans don’t really follow habits of virtue? If so, why? If not, what would ethics be about then, and why isn’t it about that now?
My plan to get myself to purchase utilons is to find the most efficient fuzzy cause I can, then say “This is how important it is for me to save my money.” Then when the time comes to purchase utilons, I can say “You know that all-important fuzzy cause? Turns out there are even better causes out there, although they might not make you feel quite so warm inside.”
Try to wean yourself off the need for warm fuzzies instead.
EDIT: No, don’t try to wean yourself off the warm fuzzies, but get the warm fuzzies from friends and family, not from people in distress in need of charity. Feel good about yourself because you are achieving your goals, including altruistic ones. (end of edit)
Carl Rogers, founder of person-centred counselling, theorised that there is an “organismic self”, with all the attributes and abilities of the human organism within its own skin, and a “self-concept” built up from what the individual sees as desirable to be. The conscious part of the human being builds up a map of that human being’s unconscious motivations and desires. Part of this map is mere falsehood, lies told to make the person feel better about himself, because he has introjected the idea that this is the way he ought to be. Disparity between the map and the territory causes cognitive dissonance, and may create the need for warm fuzzies: cognitive dissonance is painful, while pretending to be your own self-concept gives a warm fuzzy.
If you can make your self-concept, your map of yourself, match your organismic self, the actual territory which you may be strongly motivated to deny, then your need for warm fuzzies may reduce.
You will be more efficient if instead of buying warm fuzzies, you spend energy on utilons or signaling.
I am strongly motivated to altruism. I have decided to stop asking whether this is selfish or not. Yes, it is selfish, it fulfils My goals. No, it is not selfish, it fulfils the goals of others too. Is it “good” or “bad”? Don’t know that either. I have decided that does not matter. It is what I want, perhaps merely for signalling purposes.
I don’t recommend this but I’m interested in knowing how it works out for abigailgem.
A future post on the topic would be nice, esp after substantial movement in the direction described.
As Michael’s comment has been upvoted, I will respond. I have deluded myself a great deal, and decided some years ago to try to ferret out the lies I tell myself, and the motivation for these.
The main motivation was, “I lie to myself because I want to see myself as a Good person”.
In May 2008 I decided, “I am a human being”. I have the value of a human being. One among seven billion of us; but one evolved over four billion years, fitting beautifully into my environment, fitting into society with the attributes needed to live in society. Or some of the attributes. Or attributes needed to live in society in one way. Or something like that.
I am Good Enough.
So I want to stop morally judging myself. I am good enough. Does akrasia make me Bad? Am I not fulfilling my obligations to others? Am I Good? I have a neurotic flaw of taking such things too seriously, which makes me withdraw from action rather than taking the action I need to take.
Also, I am seeking to develop skills which reduce the effect of Akrasia, build better and deeper relationships, achieve goals. Life is Difficult. I have decided to stop beating myself up because I am not perfect at it.
I come at the problem with certain disordered personality traits.
Good call. You can only start any investigation from where you actually are, and you can only live the life you have.
Maybe one does not “overcome” bias in the sense of vanquishing, but in the sense of getting the better of? Roll with your ape?
Makes me wonder how hard-wired our various tendencies to see (or cling to) certain obscuring maps are, and how much we can obliterate, suppress, or Aikido flip them. Without much thought I feel that I’m not averse to, um, shocking my monkey if need be, to get myself closer to rational behavior. But, yeah, up to that extremity there’s doubtlessly a humongous lot of workable “therapies” or techniques to encourage rational inclination.
I will repeatedly bring up the concept of self-valuation because I believe it’s critically involved in a lot of irrationality. The pain of the cognitive dissonance caused by the “ought to be” self map differing from actuality is the pain of devaluation. Find a way for folks not to experience that aversive grief and you’ll have removed a great barrier to clearer thinking. I think it’s possible.
Upvoted for “roll with your ape”.
I’m posting here because I just saw a Facebook campaign to raise awareness of child abuse via profile pictures and chain letters. It takes little effort, and the marginal utility seemed extremely low, but I understand now that it is actually very good at generating fuzzies. As a side effect, the fuzzies provide an incentive, and it does generate a little bit of utility, so it’s not entirely a bad thing unless it causes people to fall into the “I’ve done my bit for the world” mindset.
This is a real problem. See Yvain’s Doing your good deed for the day.
Just a note: The established term for “a hypothetical unit of utility” is “util” or “utile” (typically pronounced “yootle”).
Why is it acceptable overhead to cater to primate impulses to the tune of $110,000?
I get that part of rationality is making the most of the faulty brain you have, but I’m not clear on the right way to decide which instincts to fight against, and which to placate.
And why does having more money justify spending SO much more on fuzzies or status? Wouldn’t it be cheaper to have a social circle of non-billionaires, and have top-dog status by being the first with a MacBook Air? (which you would have bought anyway)
No. MUCH more expensive. It would hurt your business interests. Really, $110,000 is an order of magnitude too little for a billionaire.
Does anyone really track the marginal utility of their possible investment this way? Utilons—sure. But ROI on status? ROI on “warm fuzzies”?
Also, this assumes we have good estimates of the ROI on all our options. Where do these estimates come from? In the real world, we often seem to spread our bets - constantly playing a game of multi-armed bandit with concept drift.
There are more basic desires as well. The warm fuzzy feeling may go towards attracting the opposite sex. I had my testosterone reduced, first through drugs, and then through castration. Did I overcome biases and become “less wrong”? Yes, I did. Any questions are welcome.
That’s a topic change. Try the Open Thread instead, or wait until your karma is high enough to post.
Hmm, so “testosterone” and the “s” word are out?
Nope, just goes in a different thread, that’s all. I’m interested in hearing about this, but not as a sudden drastic subject change from the main post.
I’m a little confused—are you treating this as an April Fools joke, or do you really want to hear about whether castration is a worthwhile technique? (Although I’d think it’d be hard to reconcile with even embryonic versions of your ‘Fun Theory’.)
I’m interested in the mental effects as data, not in trying it out at home.
What if you came to the conclusion that it (castration) would help you focus on The Thing That Shall Not Be Named and that it would thus increase your chances of success. What would you do?
;)
Lojban, is there any chance we could talk privately? My email address is zackmdavis (-at-) yahoo daht kahm.
I don’t care too much about karma, but maybe you do. I’ll try to reach you (under an assumed name of course!).
You’re assuming that time, rather than stamina, is the limiting factor to the amount of work you can get done in a given day, but if that’s so then sleeping is the hugest time waste ever. (Or that telling someone that their trunk is open costs as much stamina as the same amount of time spent on your day job.)
(And then there are acausal effects: if you hold doors open for people, then people sufficiently similar to you will hold doors open for you, and the overall time spent on walking through doors will go down. Where I come from everybody holds doors open for pretty much everybody else all the time, and if someone didn’t I’d assume they were in a hurry, didn’t see me, or were a foreigner.)
I wouldn’t state the motivation for a “diverse charity portfolio” as positively desiring warm fuzzies—rather, I think the preference for a mixed set (note that I doubt we would usually want an only-hands-on set of charities—too much work, and it would feel like pushing a boulder up a hill) is about potential exhaustion at repeatedly doing the one “most efficient” thing to the point that you’re not taking 60 seconds of mental refreshment. Psychological viability is the missing element here, causing us to intuitively sense that the proposal isn’t actually best, whatever the utility calculation says (the actual calculation would not have such problems).
Should read “when you try to turn a single human’s life around”.
I have no idea what the actual percentage is, but I know there are people that are damned to suffering no matter how many helping hands they get. For reference, look at lottery winners who blow it all & end up back in poverty, or even in debt.
What kind of evidence would you look for to distinguish that theory from the theory that some people need a different kind of help than what they’re getting?
I get what you’re saying. I should say “people that seem to be damned to suffering”. Of course it is possible if they got the right help that it would have a positive effect. In some cases though, although it might be possible, it is highly improbable. Have you ever read about Borderline personality disorder?
I have, yes.
The help stated here doesn’t seem all that different.
Then again, if you give someone enough money to go to college, they’ll go to college. If you give them enough money to make them think they don’t need to go to college, they won’t. Perhaps giving them less money changes it.
All this is quite right, but misses an essential point:
How can we align those three goals more effectively?
How can we use the human desire for status and warm fuzzies to maximize utilons?
And on the topic, I saved a sheep Friday. I felt all warm and fuzzy, and not just from the wool, but as I rode away I noted that most of what I know about such animals involves eating them!
Is the value of an action determined from the recipient or the giver? Using the example of telling someone their trunk is open, your cost in the action is 60 seconds and the benefit to them was… what? I suppose that would depend on the rest of the context. (Was it about to rain? Valuables in the car?) The example with the lawyer has more numbers available, but the starting point of “worth” needs to be established before the correct action can be determined.
This is only slightly relevant to this post, however. If warm fuzzies are the desired benefit, the cost/benefit ratio can completely ignore the recipient. (Technically, you are the recipient?) The same goes for status.
It’s determined from the giver. The giver is implied to be altruistic, so the value is something along the lines of the net happiness the action brings.